Locally Conditioned Belief Propagation
Authors
Abstract
Conditioned Belief Propagation (CBP) is an algorithm for approximate inference in probabilistic graphical models. It works by conditioning on a subset of variables and solving the remainder using loopy Belief Propagation. Unfortunately, CBP’s runtime scales exponentially in the number of conditioned variables. Locally Conditioned Belief Propagation (LCBP) approximates the results of CBP by treating conditions locally, thereby avoiding the exponential blow-up. We formulate LCBP as a variational optimization problem and derive a set of update equations for solving it. We show empirically that LCBP delivers results close to those of CBP, while its computational cost scales favorably with problem size.
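To make the conditioning step concrete, here is a minimal Python sketch of the CBP idea, assuming a hypothetical discrete `model` object and `run_loopy_bp` helper (neither API is from the paper): clamp one variable to each of its values, run loopy BP on each residual model, and mix the conditional marginals with weights proportional to the per-condition partition-function estimates.

```python
import numpy as np

def conditioned_bp(model, clamp_var, run_loopy_bp):
    # Hypothetical interfaces: model.cardinality(v) and model.clamp(v, value)
    # describe a discrete model; run_loopy_bp(model) returns
    # ({var: marginal_vector}, log_partition_estimate).
    per_value_marginals, log_z = [], []
    for value in range(model.cardinality(clamp_var)):
        clamped = model.clamp(clamp_var, value)   # condition on x_i = value
        marg, lz = run_loopy_bp(clamped)          # loopy BP on the residual model
        per_value_marginals.append(marg)
        log_z.append(lz)
    log_z = np.asarray(log_z)
    weights = np.exp(log_z - log_z.max())         # numerically stable weights
    weights /= weights.sum()                      # weights ≈ p(x_i = value)
    # Mix the conditionals: p(x_j) ≈ sum_v p(x_i = v) * p(x_j | x_i = v)
    return {j: sum(w * m[j] for w, m in zip(weights, per_value_marginals))
            for j in per_value_marginals[0]}
```

Conditioning on k variables would require one BP run per joint assignment, i.e. a number of runs exponential in k; this is the blow-up that LCBP sidesteps by treating conditions locally.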
Similar References
Comments for “Choosing a Variable to Clamp: Approximate Inference Using Conditioned Belief Propagation”
This document will mainly be of interest to those who are re-implementing or extending “Choosing a Variable to Clamp” [1]. Most of the text is concerned with demonstrating a fairly straightforward isomorphism between forward and reverse-mode automatic differentiation applied to the belief propagation algorithm, in section 3. This shows that “Back-Belief Propagation” of [1] is performin...
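As a generic refresher on the two AD modes whose isomorphism is discussed (a toy function, not the paper's BP construction): forward mode propagates tangents alongside primal values, while reverse mode yields all partial derivatives in one backward sweep.

```python
import math

def f_forward(a, b, da, db):
    # Forward mode on f(a, b) = a*b + sin(a): carry (value, tangent) pairs.
    v1, d1 = a * b, da * b + a * db
    v2, d2 = math.sin(a), math.cos(a) * da
    return v1 + v2, d1 + d2

def f_reverse(a, b):
    # Reverse mode: one backward sweep gives both partials of the same f.
    return b + math.cos(a), a   # (df/da, df/db)

# Tangent (da, db) = (1, 0) makes forward mode return df/da,
# which matches the reverse-mode result.
_, dfda = f_forward(0.5, 2.0, 1.0, 0.0)
ga, gb = f_reverse(0.5, 2.0)
assert abs(dfda - ga) < 1e-12
```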
Choosing a Variable to Clamp: Approximate Inference Using Conditioned Belief Propagation
In this paper we propose an algorithm for approximate inference on graphical models based on belief propagation (BP). Our algorithm is an approximate version of Cutset Conditioning, in which a subset of variables is instantiated to make the rest of the graph singly connected. We relax the constraint of single-connectedness, and select variables one at a time for conditioning, running belief pro...
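The paper's selection criterion is computed with Back-Belief Propagation; as a rough stand-in for the sequential selection loop, here is a simple maximum-entropy baseline (a common heuristic, explicitly not the paper's method), reusing the hypothetical `run_loopy_bp` interface from the sketch above.

```python
import numpy as np

def pick_next_clamp(model, run_loopy_bp, candidates):
    # Baseline heuristic: clamp the variable whose current BP marginal is
    # most uncertain. The paper instead scores candidates via gradients
    # obtained from Back-Belief Propagation.
    marginals, _ = run_loopy_bp(model)
    def entropy(p):
        p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
        return float(-(p * np.log(p)).sum())
    return max(candidates, key=lambda v: entropy(marginals[v]))
```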
Discovering Morphological Paradigms from Plain Text Using a Dirichlet Process Mixture Model
We present an inference algorithm that organizes observed words (tokens) into structured inflectional paradigms (types). It also naturally predicts the spelling of unobserved forms that are missing from these paradigms, and discovers inflectional principles (grammar) that generalize to wholly unobserved words. Our Bayesian generative model of the data explicitly represents tokens, types, inflec...
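The clustering backbone here is a Dirichlet process mixture; a generic illustration of its prior (the Chinese restaurant process, not the paper's full model or inference algorithm) shows how items are grouped into an unbounded number of clusters:

```python
import random

def crp_partition(n_items, alpha=1.0, seed=0):
    # Chinese restaurant process: item t joins existing cluster k with
    # probability proportional to its size, or opens a new cluster with
    # probability proportional to alpha.
    rng = random.Random(seed)
    sizes, labels = [], []
    for _ in range(n_items):
        k = rng.choices(range(len(sizes) + 1), weights=sizes + [alpha])[0]
        if k == len(sizes):
            sizes.append(1)   # a new cluster (e.g. a new paradigm type)
        else:
            sizes[k] += 1
        labels.append(k)
    return labels
```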
Learning unbelievable marginal probabilities
Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we...
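The "locally consistent marginals" in question are sets of singleton and pairwise marginals whose pairwise tables marginalize to the matching singletons; a minimal checker, under assumed data layouts (variable-indexed vectors, edge-indexed tables):

```python
import numpy as np

def is_locally_consistent(singletons, pairwise, tol=1e-8):
    # singletons: {i: p(x_i) as a vector}; pairwise: {(i, j): p(x_i, x_j)
    # as a table with rows indexing x_i and columns indexing x_j}.
    for (i, j), table in pairwise.items():
        T = np.asarray(table)
        if not np.allclose(T.sum(axis=1), singletons[i], atol=tol):
            return False   # row sums must recover p(x_i)
        if not np.allclose(T.sum(axis=0), singletons[j], atol=tol):
            return False   # column sums must recover p(x_j)
    return True
```

The paper's point is that some marginal sets passing this check still cannot arise as fixed points of belief propagation on any graphical model.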
Journal:
Volume / Issue:
Pages: -
Publication date: 2015